
    Objective Evaluation Criteria for Shooting Quality of Stereo Cameras over Short Distance

    Stereo cameras are the basic tools for obtaining stereoscopic image pairs and can deliver a compelling sense of depth. However, inappropriate shooting conditions may cause discomfort when viewing stereo images. It is therefore necessary to establish perceptual criteria for evaluating the shooting quality of stereo cameras. This article proposes objective quality evaluation criteria based on the characteristics of parallel and toed-in camera configurations. Considering their different internal structures and basic shooting principles, the paper focuses on short-distance shooting conditions and establishes assessment criteria for both configurations. Experimental results show that the proposed criteria can predict the visual perception of stereoscopic images and effectively evaluate stereoscopic image quality.
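
    As a rough illustration of how a shooting-quality check of this kind can be applied in practice, the sketch below tests whether the on-screen parallax produced by a parallel rig stays within an assumed one-degree comfort budget. The parallax formula, rig parameters, and threshold are generic stereoscopy conventions chosen for illustration, not the criteria proposed in the article.

```python
import numpy as np

def screen_parallax_deg(depth_m, baseline_m, focal_mm, sensor_width_mm,
                        conv_dist_m, screen_width_m, view_dist_m):
    """Screen parallax (degrees of visual angle) for a parallel stereo rig
    whose images are shifted so that conv_dist_m falls at zero parallax
    (standard horizontal-image-translation model; names are illustrative)."""
    # Disparity on the sensor (mm): f * b * (1/conv - 1/Z)
    disparity_mm = focal_mm * baseline_m * (1.0 / conv_dist_m - 1.0 / depth_m)
    # Scale from sensor to screen, then convert to visual angle at the viewer.
    parallax_m = disparity_mm / sensor_width_mm * screen_width_m
    return np.degrees(2.0 * np.arctan(parallax_m / (2.0 * view_dist_m)))

def within_comfort(depths_m, limit_deg=1.0, **rig):
    """Flag whether every scene depth stays inside an assumed +/-1 degree
    parallax comfort budget (a commonly quoted rule of thumb, not the
    article's criterion)."""
    p = np.array([screen_parallax_deg(z, **rig) for z in depths_m])
    return np.all(np.abs(p) <= limit_deg), p

# Illustrative short-distance scene with a 65 mm baseline rig.
ok, p = within_comfort([0.5, 0.8, 1.5], baseline_m=0.065, focal_mm=35.0,
                       sensor_width_mm=36.0, conv_dist_m=0.8,
                       screen_width_m=1.0, view_dist_m=2.0)
print(ok, np.round(p, 2))
```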

    Deep spectral learning for label-free optical imaging oximetry with uncertainty quantification

    Measurement of blood oxygen saturation (sO2) by optical imaging oximetry provides invaluable insight into local tissue function and metabolism. Despite different embodiments and modalities, all label-free optical-imaging oximetry techniques rely on the same principle of sO2-dependent spectral contrast from haemoglobin. Traditional approaches for quantifying sO2 often rely on analytical models fitted to the spectral measurements. In practice, these approaches suffer from uncertainties due to biological variability, tissue geometry, light scattering, systemic spectral bias, and variations in experimental conditions. Here, we propose a new data-driven approach, termed deep spectral learning (DSL), to achieve oximetry that is highly robust to experimental variations and, more importantly, able to provide uncertainty quantification for each sO2 prediction. To demonstrate the robustness and generalizability of DSL, we analyse data from two visible-light optical coherence tomography (vis-OCT) setups across two separate in vivo experiments on rat retinas. Predictions made by DSL are highly adaptive to experimental variabilities as well as to the depth-dependent backscattering spectra. Two neural-network-based models are tested and compared with the traditional least-squares fitting (LSF) method. The DSL-predicted sO2 shows significantly lower mean-square errors than those of the LSF. For the first time, we demonstrate en face maps of retinal oximetry along with a pixel-wise confidence assessment. Our DSL overcomes several limitations of traditional approaches and provides a more flexible, robust, and reliable deep-learning approach for in vivo non-invasive label-free optical oximetry. Funding: R01 CA224911 - NCI NIH HHS; R01 CA232015 - NCI NIH HHS; R01 NS108464 - NINDS NIH HHS; R21 EY029412 - NEI NIH HHS. (Accepted manuscript)
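
    The DSL architecture is not detailed in this abstract; as a minimal sketch of how per-prediction uncertainty can be obtained, the example below trains a small network that maps a backscattering spectrum to an sO2 estimate together with a predictive variance using a Gaussian negative log-likelihood. The layer sizes, the 16-band input, and the loss choice are assumptions for illustration and may differ from the authors' models.

```python
import torch
import torch.nn as nn

class SpectralOximetryNet(nn.Module):
    """Maps a per-pixel backscattering spectrum (n_bands wavelength bins)
    to an sO2 estimate and a per-prediction variance. Hypothetical layer
    sizes; the paper's architecture may differ."""
    def __init__(self, n_bands=16, hidden=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_bands, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean_head = nn.Linear(hidden, 1)    # sO2 in [0, 1] after sigmoid
        self.logvar_head = nn.Linear(hidden, 1)  # log predictive variance

    def forward(self, spectra):
        h = self.body(spectra)
        mean = torch.sigmoid(self.mean_head(h))
        var = torch.exp(self.logvar_head(h))
        return mean, var

def train_step(model, optimizer, spectra, so2_labels):
    """One optimization step with the Gaussian negative log-likelihood,
    so the variance head learns to flag unreliable spectra."""
    model.train()
    optimizer.zero_grad()
    mean, var = model(spectra)
    loss = nn.GaussianNLLLoss()(mean, so2_labels, var)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random data standing in for vis-OCT spectra.
model = SpectralOximetryNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 16)   # batch of 32 spectra
y = torch.rand(32, 1)     # ground-truth sO2 fractions
print(train_step(model, opt, x, y))
```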

    Towards Optimal Discrete Online Hashing with Balanced Similarity

    When facing large-scale image datasets, online hashing serves as a promising solution for online retrieval and prediction tasks. It encodes the online streaming data into compact binary codes and simultaneously updates the hash functions to renew the codes of the existing dataset. However, existing methods update the hash functions solely based on the new data batch, without investigating the correlation between such new data and the existing dataset. In addition, existing works update the hash functions through a relaxation process in the corresponding approximated continuous space, and it remains an open problem to apply discrete optimization directly in online hashing. In this paper, we propose a novel supervised online hashing method, termed Balanced Similarity for Online Discrete Hashing (BSODH), to solve the above problems in a unified framework. BSODH employs a well-designed hashing algorithm to preserve the similarity between the streaming data and the existing dataset via an asymmetric graph regularization. We further identify the "data-imbalance" problem brought by the constructed asymmetric graph, which restricts the application of discrete optimization in our setting. Therefore, a novel balanced similarity is proposed, which uses two equilibrium factors to balance the similar and dissimilar weights and eventually enables the use of discrete optimization. Extensive experiments conducted on three widely used benchmarks demonstrate the advantages of the proposed method over state-of-the-art methods. Comment: 8 pages, 11 figures, conference.
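
    The sketch below illustrates the balanced-similarity idea under stated assumptions: similar and dissimilar pairs between the streaming batch and the existing set are weighted by two equilibrium factors, and the batch codes are then updated discretely by a sign step that maximizes their correlation with the similarity targets. The factor values and the one-step update are illustrative simplifications, not BSODH's full alternating optimization.

```python
import numpy as np

def balanced_similarity(batch_labels, base_labels, r, eta_s=1.2, eta_d=0.2):
    """Asymmetric similarity between a new batch and the existing set.
    Similar pairs get target +r*eta_s, dissimilar pairs -r*eta_d, where r is
    the code length and (eta_s, eta_d) are illustrative equilibrium factors
    meant to counter the dominance of dissimilar pairs."""
    same = (batch_labels[:, None] == base_labels[None, :]).astype(float)
    return r * (eta_s * same - eta_d * (1.0 - same))

def update_batch_codes(S, B_base):
    """With the base codes fixed, the batch codes in {-1,+1} that maximize
    the correlation term tr(B_batch^T S B_base) of an inner-product loss are
    obtained by taking signs of S @ B_base (a simplified stand-in for
    BSODH's alternating discrete optimization)."""
    return np.sign(S @ B_base + 1e-12)   # {-1, +1}^{n_batch x r}

# Toy streaming step: 200 existing items, a batch of 32, 16-bit codes.
rng = np.random.default_rng(0)
r = 16
base_labels = rng.integers(0, 5, size=200)
B_base = np.sign(rng.standard_normal((200, r)))
batch_labels = rng.integers(0, 5, size=32)
S = balanced_similarity(batch_labels, base_labels, r)
B_batch = update_batch_codes(S, B_base)
print(B_batch.shape, np.unique(B_batch))
```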

    The Proximal Operator of the Piece-wise Exponential Function and Its Application in Compressed Sensing

    This paper characterizes the proximal operator of the piece-wise exponential function $1-e^{-|x|/\sigma}$ with a given shape parameter $\sigma>0$, which is a popular nonconvex surrogate of the $\ell_0$-norm in support vector machines, zero-one programming problems, compressed sensing, etc. Although Malek-Mohammadi et al. [IEEE Transactions on Signal Processing, 64(21):5657--5671, 2016] worked on this problem, the expressions they derived were inaccurate: one case was missing. Using the Lambert W function and an extensive study of the piece-wise exponential function, we rectify the formulation of its proximal operator in light of their work and give a thorough analysis of this operator. Finally, as an application in compressed sensing, an iterative shrinkage and thresholding algorithm (ISTA) for the piece-wise exponential regularization problem is developed and fully investigated. A comparative study of ISTA with nine popular nonconvex penalties in compressed sensing demonstrates the advantage of the piece-wise exponential penalty.
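
    The closed-form prox derived in the paper (via the Lambert W function) is not reproduced here; the sketch below instead evaluates the scalar proximal operator numerically and plugs it into a standard ISTA loop for the piece-wise exponential regularized least-squares problem. Problem sizes and parameters are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def prox_pie(y, lam, sigma):
    """Proximal operator of lam * (1 - exp(-|x|/sigma)), evaluated
    numerically per coordinate (the paper gives a closed form via the
    Lambert W function; here we simply minimize the scalar objective)."""
    def single(yi):
        a = abs(yi)
        if a == 0.0:
            return 0.0
        g = lambda t: 0.5 * (t - a) ** 2 + lam * (1.0 - np.exp(-t / sigma))
        # The minimizer lies in [0, |y|] and the objective may have two local
        # minima there: locate the best region on a coarse grid, refine it,
        # and compare against the candidate t = 0.
        ts = np.linspace(0.0, a, 50)
        t0 = ts[np.argmin([g(t) for t in ts])]
        lo, hi = max(t0 - a / 49, 0.0), min(t0 + a / 49, a)
        res = minimize_scalar(g, bounds=(lo, hi), method="bounded")
        best = min((0.0, t0, res.x), key=g)
        return np.sign(yi) * best
    return np.vectorize(single)(y)

def ista_pie(A, b, lam, sigma, n_iter=300):
    """ISTA for 0.5*||Ax - b||^2 + lam * sum(1 - exp(-|x_i|/sigma))."""
    L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = prox_pie(x - grad / L, lam / L, sigma)
    return x

# Toy sparse-recovery instance.
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 128))
x_true = np.zeros(128)
x_true[rng.choice(128, 8, replace=False)] = rng.standard_normal(8)
b = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = ista_pie(A, b, lam=0.05, sigma=0.5)
print("support recovered:", np.flatnonzero(np.abs(x_hat) > 1e-3))
```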

    On Choosing Initial Values of Iteratively Reweighted $\ell_1$ Algorithms for the Piece-wise Exponential Penalty

    Computing the proximal operator of the sparsity-promoting piece-wise exponential (PiE) penalty $1-e^{-|x|/\sigma}$ with a given shape parameter $\sigma>0$, a popular nonconvex surrogate of the $\ell_0$-norm, is fundamental in feature selection via support vector machines, image reconstruction, zero-one programming problems, compressed sensing, etc. Because PiE is nonconvex, its proximal operator has long been evaluated via an iteratively reweighted $\ell_1$ algorithm, which replaces PiE with its first-order approximation; the solutions obtained, however, are only critical points. Based on the exact characterization of the proximal operator of PiE, we explore how the iteratively reweighted $\ell_1$ solution deviates from the true proximal operator in certain regions, which can be identified explicitly in terms of $\sigma$, the initial value, and the regularization parameter in the definition of the proximal operator. Moreover, the initial value can be chosen adaptively and simply to ensure that the iteratively reweighted $\ell_1$ solution belongs to the proximal operator of PiE.
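
    A minimal sketch of the iteratively reweighted $\ell_1$ evaluation discussed above, assuming the standard linearization of PiE: each step soft-thresholds with weight $(\lambda/\sigma)e^{-|x^k|/\sigma}$, so the fixed point reached depends on the initial value. The numbers below are illustrative; with them, starting from zero stays at zero while starting from $y$ reaches a nonzero critical point closer to the true proximal value.

```python
import numpy as np

def soft_threshold(y, tau):
    """Proximal operator of tau * |x| (the weighted l1 subproblem solver)."""
    return np.sign(y) * np.maximum(np.abs(y) - tau, 0.0)

def irl1_prox_pie(y, lam, sigma, x0, n_iter=100):
    """Iteratively reweighted l1 evaluation of the prox of
    lam * (1 - exp(-|x|/sigma)): at each step PiE is replaced by its
    first-order model at x^k, giving a soft-thresholding update with
    weight (lam/sigma) * exp(-|x^k|/sigma). The fixed point reached is
    only a critical point and depends on the initial value x0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        w = (lam / sigma) * np.exp(-np.abs(x) / sigma)
        x = soft_threshold(y, w)
    return x

# Two initial values converge to different critical points for the same y
# (illustrative numbers, not taken from the paper).
y, lam, sigma = np.array([1.6]), 1.0, 0.5
print(irl1_prox_pie(y, lam, sigma, x0=np.zeros(1)))  # starts at 0, stays at 0
print(irl1_prox_pie(y, lam, sigma, x0=y.copy()))     # reaches a nonzero critical point
```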